Millions of surveillance cameras operate 24x7, generating huge amounts of visual data for processing. However, retrieving important activities from such large data can be time consuming. Thus, researchers are working on solutions to present hours of visual data in a compressed but meaningful way. Video synopsis is one way to represent activities using relatively short-duration clips. So far, two main approaches have been used by researchers to address this problem, namely synopsis by tracking moving objects and synopsis by clustering moving objects. Synopsis outputs mainly depend on tracking, segmenting, and shifting moving objects temporally as well as spatially. In many situations, tracking fails and thus produces multiple trajectories of the same object. As a result, the object may appear and disappear multiple times within the same synopsis output, which is misleading. This also leads to discontinuity and can often confuse the viewer of the synopsis. In this paper, we present a new approach for generating a compressed video synopsis by grouping tracklets of moving objects. Grouping helps to generate a synopsis in which chronologically related objects appear together with meaningful spatio-temporal relations. Our proposed method produces continuous and less confusing synopses when tested on publicly available dataset videos as well as on in-house dataset videos.
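The tracklet-grouping idea above can be illustrated with a minimal sketch: when tracking breaks, one object yields several fragments, and fragments whose end and start are close in both time and space are likely the same object and can be linked before synopsis generation. The function name, tracklet representation, and thresholds below are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of tracklet grouping for video synopsis. Each tracklet
# is assumed to be (start_frame, end_frame, entry_xy, exit_xy); max_gap and
# max_dist are illustrative thresholds, not values from the paper.
import math

def link_tracklets(tracklets, max_gap=30, max_dist=50.0):
    """Link tracklets likely produced by the same object: one tracklet
    ends, another begins shortly after, spatially nearby (union-find)."""
    parent = list(range(len(tracklets)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, (_, end_i, _, exit_xy) in enumerate(tracklets):
        for j, (start_j, _, entry_xy, _) in enumerate(tracklets):
            if i == j:
                continue
            gap = start_j - end_i
            dist = math.dist(exit_xy, entry_xy)
            if 0 < gap <= max_gap and dist <= max_dist:
                union(i, j)  # treat as fragments of one trajectory

    groups = {}
    for i in range(len(tracklets)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two fragments of one object (frames 0-100 and 105-200, spatially
# continuous) and one unrelated tracklet much later in the video.
tracklets = [
    (0, 100, (10, 10), (200, 200)),
    (105, 200, (205, 203), (400, 400)),
    (500, 600, (50, 50), (60, 60)),
]
print(link_tracklets(tracklets))  # fragments 0 and 1 end up in one group
```

Each resulting group can then be shifted as one unit when composing the synopsis, so a fragmented object no longer appears and disappears repeatedly.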